
    The role of context in human memory augmentation

    Technology has always had a direct impact on what humans remember. In the era of smartphones and wearable devices, people easily capture pictures and videos on a daily basis, which can help them remember past experiences and attained knowledge, or simply evoke memories for reminiscing. The increasing use of such ubiquitous devices and technologies produces a vast volume of pictures and videos that, combined with additional contextual information, could significantly improve one’s ability to recall past experiences and prior knowledge. Calendar entries, application use logs, social media posts, and activity logs are only a few examples of such potentially memory-supportive information. This work explores how such memory-supportive information can be collected, filtered, and eventually utilised to generate memory cues: fragments of past experience or prior knowledge intended to trigger one’s memory recall. In this thesis, we showcase how we leverage modern ubiquitous technologies as a vessel for transferring established psychological methods from the lab into the real world, significantly and measurably augmenting human memory recall in a diverse set of often challenging contexts. We combine experimental evidence gathered from numerous field and lab studies with knowledge amassed from an extensive literature review to inform the design and development of future pervasive memory augmentation systems. Ultimately, this work contributes to the fundamental understanding of human memory and of how today’s technologies can be actuated to augment it.

    EmoSnaps: a mobile application for emotion recall from facial expressions

    We introduce EmoSnaps, a mobile application that unobtrusively captures pictures of one’s facial expressions throughout the day and uses them for later recall of one’s momentary emotions. We describe two field studies that employ EmoSnaps to investigate if and how individuals and their relevant others infer emotions from self-face and familiar-face pictures, respectively. Study 1 contrasted users’ recalled emotions, as inferred from EmoSnaps’ self-face pictures, with ground-truth data derived from Experience Sampling. Contrary to our expectations, we found that people are better able to infer their past emotions from a self-face picture the more time has elapsed since capture. Study 2 assessed EmoSnaps’ ability to capture users’ experiences while interacting with different mobile apps. The study revealed systematic variations in users’ emotions while interacting with different categories of mobile apps (such as productivity and entertainment), social networking services, and direct social communications through phone calls and instant messaging, as well as diurnal and weekly patterns of happiness as inferred from EmoSnaps’ self-face pictures. All in all, the results of both studies gave us confidence in the validity of self-face pictures captured through EmoSnaps as memory cues for emotion recall, and in the effectiveness of EmoSnaps in measuring users’ momentary experiences.
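    Experience Sampling, used above as ground truth, typically prompts participants at random times during the day subject to a minimum gap between prompts. The paper does not publish its scheduling code; the sketch below illustrates such a scheduler (the name `esm_schedule` and its parameters are ours, not EmoSnaps').

```python
import random
from datetime import datetime, timedelta

def esm_schedule(day_start, day_end, n_prompts, min_gap_minutes=30, seed=None):
    """Draw n_prompts random prompt times in [day_start, day_end],
    at least min_gap_minutes apart (rejection sampling)."""
    rng = random.Random(seed)
    span = (day_end - day_start).total_seconds() / 60  # day length in minutes
    while True:
        minutes = sorted(rng.uniform(0, span) for _ in range(n_prompts))
        if all(b - a >= min_gap_minutes for a, b in zip(minutes, minutes[1:])):
            return [day_start + timedelta(minutes=m) for m in minutes]

# Example: six prompts between 09:00 and 21:00, at least 30 minutes apart.
prompts = esm_schedule(datetime(2024, 5, 1, 9, 0),
                       datetime(2024, 5, 1, 21, 0),
                       n_prompts=6, seed=42)
```

    Rejection sampling is a simple way to enforce the gap constraint; with six prompts over twelve hours it accepts quickly, though tighter constraints would call for a direct construction.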

    PLBSD: a Platform for Proactive Location-based Service Discovery

    We introduce a platform for the rapid prototyping of proactive location-based service discovery. Proactive location-based services are conceptualised along three broad categories, location-triggered, chain-triggered, and proximity-triggered services, and are illustrated through a number of usage scenarios. We report on a workshop with designers and researchers in the area of location-based services that resulted in a set of initial requirements for the platform. We describe how the platform addresses these requirements, and illustrate the implemented features through the development of a proactive location-based application.
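    Of the three categories, a proximity-triggered service is essentially a geofence check: the service fires when the user's position falls within its trigger radius. PLBSD's API is not reproduced here; the `Service` record and haversine helper below are an illustrative sketch, not part of the platform.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in metres."""
    R = 6_371_000  # mean Earth radius, metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

@dataclass
class Service:
    name: str
    lat: float
    lon: float
    radius_m: float  # trigger radius around the service location

def proximity_triggered(services, user_lat, user_lon):
    """Return the services whose trigger radius contains the user's position."""
    return [s for s in services
            if haversine_m(s.lat, s.lon, user_lat, user_lon) <= s.radius_m]
```

    A chain-triggered service could reuse the same check but condition on the sequence of previously triggered services; a location-triggered service fires on entry into a named place rather than raw proximity.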

    Endowing Head-Mounted Displays with Physiological Sensing for Augmenting Human Learning and Cognition

    The EEGlass prototype is a merger of a Head-Mounted Display (HMD) and a brain-sensing platform, with a set of electroencephalography (EEG) electrodes at the contact points with the skull. EEGlass unobtrusively measures the activity of the human brain, facilitating interaction with HMDs for augmenting human cognition. Among other uses, EEGlass is intended for collecting context-aware EEG measurements, supporting learning and cognitive experiments outside the laboratory environment. We thus expect EEGlass to promote the implementation and application of ecologically valid research methods (studies in the user's natural context).

    Exploring EEG signals during the different phases of game-player interaction

    Games are nowadays used to enhance different learning and teaching practices in institutions, companies, and other venues. Factors that increase the adoption and integration of learning games have been widely studied in the past. However, the effect of different backgrounds and designs on learners'/players' electroencephalographic (EEG) signals during game-play remains under-explored. Such insights may enable us to design and utilise games in a way that adapts to users' cognitive abilities and facilitates learning. In this paper, we describe a controlled study consisting of 251 game sessions with 17 players that focused on skill development (i.e., a user's ability to master complex tasks) while collecting EEG and game-play data. Our results unveiled factors, related to the game phases and learners'/players' expertise, that affect mental effort when playing a learning game. In particular, our analysis showed an effect of players' background (experience and performance) and game design (number of attempts/lives and difficulty) on players' mental effort during game-play. Finally, we discuss how such effects could benefit the design and application of games for learning, as well as directions for future research.

    EEGlass: An EEG-Eyeware Prototype for Ubiquitous Brain-Computer Interaction

    Contemporary Head-Mounted Displays (HMDs) are progressively becoming socially acceptable by approaching the size and design of normal eyewear. Apart from the exciting interaction design prospects, HMDs bear significant potential for hosting an array of physiological sensors very close to the human skull. As a proof of concept, we present EEGlass, an early wearable prototype comprised of plastic eyewear frames that approximate the form factor of a modern HMD. EEGlass is equipped with an OpenBCI board and a set of EEG electrodes at the contact points with the skull for unobtrusively collecting data related to the activity of the human brain. We tested our prototype with one participant performing cognitive and sensorimotor tasks while also wearing an established Electroencephalography (EEG) device to obtain a baseline. Our preliminary results showcase that EEGlass is capable of accurately capturing resting state and detecting motor action and electrooculographic (EOG) artifacts. Further experimentation is required, but our early trials with EEGlass are promising: HMDs could serve as a springboard for moving EEG outside of the lab and into everyday life, facilitating the design of neuroadaptive systems.
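    The abstract does not describe the signal-processing pipeline, but a common, simple marker of the eyes-closed resting state it mentions is elevated alpha-band (8–12 Hz) power relative to the broader spectrum. A minimal NumPy sketch of that idea (function names are illustrative, not EEGlass code):

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Average spectral power of `signal` within [low, high] Hz, via the real FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= low) & (freqs <= high)
    return psd[band].mean()

def alpha_ratio(signal, fs):
    """Alpha (8-12 Hz) power relative to broadband (1-30 Hz) power.
    A high ratio is a classic indicator of an eyes-closed resting state."""
    return band_power(signal, fs, 8, 12) / band_power(signal, fs, 1, 30)
```

    On a window dominated by a 10 Hz oscillation this ratio is well above 1, while a window dominated by beta activity (e.g. 20 Hz) yields a ratio near 0, so thresholding it gives a crude resting-state detector.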

    Does Locality Make a Difference? Assessing the Effectiveness of Location-aware Narratives

    With the increasing sophistication of mobile computing, growing interest has been paid to locative media that aim at providing immersive experiences. Location-aware narratives are a particular kind of locative media that aim at "telling stories that unfold in real space". This paper presents a study assessing an underlying hypothesis of location-aware narratives: that the coupling between the physical space and the narrative results in increased levels of immersion in the narrative. Forty-five individuals experienced a location-aware video narrative in three locations: (a) the original location, which contains physical cues from the narrative world, (b) a different location that nonetheless portrays a similar atmosphere, and (c) a location that contains neither physical cues nor a similar atmosphere. Significant differences were found in users' experiences of the narrative in terms of immersion in the story and mental imagery, but not with regard to feelings of presence, emotional involvement, or the memorability of story elements. We reflect on these findings and their implications for the design of location-aware narratives, and highlight questions for further research.